    Visualizing 2D Flows with Animated Arrow Plots

    Flow fields are often represented by a set of static arrows in popular-science illustrations, documentary films, meteorology, etc. This simple schematic representation lets an observer intuitively interpret the main properties of a flow: its orientation and velocity magnitude. We propose to generate dynamic versions of such representations for 2D unsteady flow fields. Our algorithm smoothly animates arrows along the flow while controlling their density in the domain over time. Several strategies are combined to reduce the unavoidable popping artifacts that arise when arrows appear and disappear, and to achieve visually pleasing animations. Distracting arrow rotations in low-velocity regions are also handled by continuously morphing arrow glyphs into semi-transparent discs. To substantiate our method, we provide results for synthetic and real velocity field datasets.
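    The core animation loop described above can be sketched as follows. This is a minimal illustration under assumed conventions: the `advect_arrows` helper is hypothetical, and where the paper fades arrows in and out to reduce popping, this sketch simply respawns them at random positions.

```python
import numpy as np

def advect_arrows(seeds, velocity, dt=0.1, n_steps=50, domain=(0.0, 1.0)):
    """Advect arrow glyph positions through a steady 2D velocity field.

    Arrows follow the flow with forward-Euler steps; arrows leaving the
    domain are respawned at random positions so the glyph density in the
    domain stays roughly constant over time.
    """
    rng = np.random.default_rng(0)
    lo, hi = domain
    pos = np.array(seeds, dtype=float)
    frames = []
    for _ in range(n_steps):
        pos = pos + dt * np.array([velocity(p) for p in pos])
        # Respawn arrows that left the domain (the paper instead fades
        # them out smoothly; respawning is the crude version).
        outside = (pos < lo).any(axis=1) | (pos > hi).any(axis=1)
        pos[outside] = rng.uniform(lo, hi, size=(int(outside.sum()), 2))
        frames.append(pos.copy())
    return frames

# Simple rigid rotation about the domain center.
vel = lambda p: np.array([-(p[1] - 0.5), p[0] - 0.5])
frames = advect_arrows([[0.6, 0.5], [0.5, 0.8]], vel, n_steps=10)
```

    A real implementation would additionally modulate each glyph's opacity with its speed, morphing slow arrows into semi-transparent discs as the abstract describes.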

    Parallel extraction and simplification of large isosurfaces using an extended tandem algorithm

    In order to deal with the common trend of size increase in volumetric datasets, research in isosurface extraction has focused in the past few years on related aspects such as surface simplification and load-balanced parallel algorithms. We present a parallel, block-wise extension of the tandem algorithm by Attali et al., which simplifies an isosurface on the fly as it is extracted. Our approach minimizes overall memory consumption using an adequate block splitting and merging strategy, along with a component dumping mechanism that drastically reduces the amount of memory needed for particular datasets such as those encountered in geophysics. As soon as they are detected, surface components are migrated to disk along with a metadata index (oriented bounding box, volume, etc.) that enables further improved exploration scenarios (small-component removal or selection of particularly oriented components, for instance). For ease of implementation, we carefully describe a master-and-worker algorithm architecture that clearly separates the four required basic tasks. We show several results of our parallel algorithm applied to a geophysical dataset of size 7000 × 1600 × 2000.
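    The bookkeeping behind the component dumping mechanism can be sketched as below. This is a simplified illustration, not the paper's implementation: the `ComponentDumper` class and its methods are hypothetical names, triangles are grouped with a union-find over shared vertices, and a finished component is written to disk together with a small metadata record (here an axis-aligned bounding box and a triangle count, where the paper uses an oriented bounding box).

```python
import json, os, tempfile

class ComponentDumper:
    """Group extracted triangles into surface components (union-find over
    shared vertices) and migrate finished components to disk, keeping
    only a metadata index in memory for later filtered exploration."""

    def __init__(self, out_dir):
        self.out_dir = out_dir
        self.parent = {}   # union-find parent pointers, keyed by vertex
        self.tris = {}     # component root -> list of triangles
        self.index = []    # in-memory metadata records

    def _find(self, v):
        self.parent.setdefault(v, v)
        while self.parent[v] != v:
            self.parent[v] = self.parent[self.parent[v]]  # path halving
            v = self.parent[v]
        return v

    def add_triangle(self, tri):
        roots = [self._find(v) for v in tri]
        root = roots[0]
        for other in roots[1:]:
            other = self._find(other)
            if other != root:
                # Merge the two components and their triangle lists.
                self.parent[other] = root
                self.tris.setdefault(root, []).extend(self.tris.pop(other, []))
        self.tris.setdefault(root, []).append(tri)

    def dump(self, vertex):
        """Migrate the finished component containing `vertex` to disk."""
        root = self._find(vertex)
        tris = self.tris.pop(root)
        xs = [x for t in tris for (x, _, _) in t]
        ys = [y for t in tris for (_, y, _) in t]
        zs = [z for t in tris for (_, _, z) in t]
        meta = {"bbox": [min(xs), min(ys), min(zs), max(xs), max(ys), max(zs)],
                "n_triangles": len(tris)}
        path = os.path.join(self.out_dir, f"component_{len(self.index)}.json")
        with open(path, "w") as f:
            json.dump(tris, f)
        self.index.append(meta)
        return meta

d = ComponentDumper(tempfile.mkdtemp())
d.add_triangle(((0, 0, 0), (1, 0, 0), (0, 1, 0)))
d.add_triangle(((1, 0, 0), (1, 1, 0), (0, 1, 0)))
meta = d.dump((0, 0, 0))
```

    Keeping only the metadata index in memory is what makes scenarios like small-component removal cheap: components can be filtered by bounding box or size without reloading their geometry.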

    Isosurface extraction and interpretation on very large datasets in geophysics

    In order to deal with the heavy trend of size increase in volumetric datasets, research in isosurface extraction has focused in the past few years on related aspects such as surface simplification and load-balanced parallel algorithms. We present in this paper a parallel, block-wise extension of the tandem algorithm [Attali et al. 2005], which simplifies an isosurface on the fly as it is extracted. Our approach minimizes overall memory consumption using an adequate block splitting and merging strategy, along with the introduction of a component dumping mechanism that drastically reduces the amount of memory needed for particular datasets such as those encountered in geophysics. As soon as they are detected, surface components are migrated to disk along with a metadata index (oriented bounding box, volume, etc.) that allows further improved exploration scenarios (small-component removal or selection of particularly oriented components, for instance). For ease of implementation, we carefully describe a master-and-worker algorithm architecture that clearly separates the four required basic tasks. We show several results of our parallel algorithm applied to a 7000 × 1600 × 2000 geophysical dataset.

    Slimming Brick Cache Strategies for Seismic Horizon Propagation Algorithms

    In this paper, we propose a new bricked cache system suited to a particular surface propagation algorithm: seismic horizon reconstruction. The application domain of this algorithm is the interpretation of seismic volumes, used for instance by petroleum companies for oil prospecting. To ensure the optimality of the extracted surface, the algorithm must access the data volume randomly. This lack of data locality requires the volume to reside entirely in main memory to achieve decent performance. For volumes larger than memory, we show that a classical brick cache strategy still produces good performance up to a certain size. As these volumes grow very quickly and can now exceed 200 GB, we demonstrate that the performance of the classical algorithm drops dramatically on a standard workstation with limited memory (currently 8 GB to 16 GB). In order to handle such large volumes, we introduce a new slimming brick cache strategy in which brick size evolves with the processed data: at each step of the algorithm, already-processed data can be removed from the cache. This new brick format allows a larger number of bricks to be kept in memory. We further improve the releasing mechanism by filling, in priority, the "holes" that appear in the surface during the propagation process. With this new cache strategy, horizons can be extracted from volumes up to 75 times the size of the available cache memory. We discuss the performance and results of this new approach applied to both synthetic and real data.
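    The slimming idea can be sketched in a few lines. This is a minimal illustration under assumed conventions (the `SlimmingBrickCache` class, its eviction policy, and the per-voxel bookkeeping are all hypothetical simplifications of the paper's brick format): bricks hold voxels in a dictionary, and voxels already consumed by the propagation are dropped, so resident bricks shrink over time and more bricks fit in the same memory budget.

```python
class SlimmingBrickCache:
    """Brick cache whose bricks shrink as the propagation consumes
    their voxels, so the fixed memory budget holds more bricks."""

    def __init__(self, budget_voxels, load_brick):
        self.budget = budget_voxels
        self.load_brick = load_brick   # brick_id -> {voxel_id: value}
        self.bricks = {}               # resident (possibly slimmed) bricks
        self.resident = 0              # total voxels currently in memory

    def get(self, brick_id, voxel_id):
        if brick_id not in self.bricks:
            brick = dict(self.load_brick(brick_id))
            while self.resident + len(brick) > self.budget and self.bricks:
                # Evict the slimmest brick first: it frees the least but
                # is also the most likely to be nearly consumed.
                victim = min(self.bricks, key=lambda b: len(self.bricks[b]))
                self.resident -= len(self.bricks.pop(victim))
            self.bricks[brick_id] = brick
            self.resident += len(brick)
        return self.bricks[brick_id][voxel_id]

    def mark_processed(self, brick_id, voxel_id):
        """Release a voxel once the propagation no longer needs it."""
        brick = self.bricks.get(brick_id)
        if brick and voxel_id in brick:
            del brick[voxel_id]
            self.resident -= 1

# Each hypothetical brick holds four voxels; budget is six voxels total.
cache = SlimmingBrickCache(6, lambda b: {i: b * 100 + i for i in range(4)})
```

    After `mark_processed` has slimmed the first brick, a second full brick fits alongside it without any eviction, which is exactly the effect the paper exploits at a much larger scale.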

    ISA and IBFVS: image space-based visualization of flow on surfaces

    We present a side-by-side analysis of two recent image-space approaches for the visualization of vector fields on surfaces. The two methods, Image Space Advection (ISA) and Image Based Flow Visualization for Curved Surfaces (IBFVS), generate dense representations of time-dependent vector fields with high spatio-temporal correlation. While the 3D vector fields are associated with arbitrary surfaces represented by triangular meshes, the generation and advection of texture properties are confined to image space. Fast frame rates are achieved by exploiting frame-to-frame coherence and graphics hardware. In our comparison of ISA and IBFVS, we point out the strengths and weaknesses of each approach and give recommendations as to when and where each is best applied.
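    One advection step in image space can be sketched as follows. This is a deliberately simplified illustration in the spirit of ISA/IBFVS, not either paper's exact scheme (the `advect_image` helper is hypothetical, and real implementations run on graphics hardware with bilinear filtering): each pixel fetches the texture value upstream of it, and a little fresh noise is blended in to maintain contrast.

```python
import numpy as np

def advect_image(tex, vel, noise, dt=1.0, alpha=0.1):
    """One backward-advection step of a texture in image space:
    new[p] = (1 - alpha) * old[p - dt * v(p)] + alpha * noise[p],
    with nearest-neighbour lookup clamped to the image border."""
    h, w = tex.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Follow the velocity upstream to find each pixel's source texel.
    src_x = np.clip(np.round(xs - dt * vel[..., 0]).astype(int), 0, w - 1)
    src_y = np.clip(np.round(ys - dt * vel[..., 1]).astype(int), 0, h - 1)
    advected = tex[src_y, src_x]
    return (1 - alpha) * advected + alpha * noise

# A single bright texel in a uniform rightward flow moves one pixel right.
tex = np.zeros((4, 4))
tex[2, 1] = 1.0
vel = np.zeros((4, 4, 2))
vel[..., 0] = 1.0
out = advect_image(tex, vel, np.zeros((4, 4)), alpha=0.0)
```

    The frame-to-frame coherence both papers exploit comes from feeding each output frame back in as the next frame's input texture.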

    Testing for the Dual-Route Cascade Reading Model in the Brain: An fMRI Effective Connectivity Account of an Efficient Reading Style

    Neuropsychological data on the forms of acquired reading impairment provide a strong basis for the theoretical framework of the dual-route cascade (DRC) model, which is predictive of reading performance. However, lesions are often extensive and heterogeneous, making it difficult to establish precise functional anatomical correlates. Here, we provide a connective neural account with the aim of accommodating the main principles of the DRC framework and making predictions about reading skill. We located prominent reading areas using fMRI and applied structural equation modeling to pinpoint distinct neural pathways. The functionality of regions, together with neural network dissociations between words and pseudowords, corroborates the existing neuroanatomical view of the DRC and provides a novel outlook on the sub-regions involved. In a similar vein, congruent (or incongruent) reliance on pathways, that is, reliance on the word (or pseudoword) pathway during word reading and on the pseudoword (or word) pathway during pseudoword reading, predicted good (or poor) reading performance as assessed by out-of-magnet reading tests. Finally, inter-individual analysis revealed an efficient reading style mirroring pathway reliance as a function of the fingerprint of the stimulus to be read, suggesting an optimal pattern of cerebral information trafficking that leads to high reading performance.

    Multiresolution flow visualization

    Flow visualization has been an active research field for several years, and various techniques have been proposed to visualize vector fields, streamlines and textures being the most effective and popular ones. While streamlines are suitable for getting rough information on the behavior of the flow, textures depict the flow properties at the pixel level. Depending on the situation, the suitable representation may be either streamlines or a texture. This paper presents a method to compute a sequence of streamline-based images of a vector field with different densities, ranging from sparse to texture-like representations. It is based on an effective streamline placement algorithm and a production scheme that recalls those used in multiresolution theory: a streamline defined at level J of the hierarchy remains defined at all levels J′ > J. A viewer allows us to interactively select the desired density while zooming in and out of a vector field. The density of streamlines in the image can also be computed automatically as a function of a derived quantity, such as velocity or vorticity.
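    The nesting property (a streamline at level J persists at all finer levels) can be illustrated with a greedy seeding sketch. This is a simplification under stated assumptions: the `multires_seeds` helper is hypothetical, seeds stand in for whole streamlines, and real placement algorithms test the separation distance against entire streamline curves, not just seed points.

```python
import math

def multires_seeds(candidates, d0, levels):
    """Greedy insertion of seed points with minimum separation
    d0 / 2**j at level j. Because accepted seeds are never discarded,
    the level-j set is a subset of every finer level's set."""
    accepted = []    # (x, y) seeds, in acceptance order
    per_level = []
    for j in range(levels):
        d = d0 / (2 ** j)
        for (x, y) in candidates:
            if all(math.hypot(x - ax, y - ay) >= d for (ax, ay) in accepted):
                accepted.append((x, y))
        per_level.append(list(accepted))
    return per_level

# Four candidate seeds on a line; separation halves at each level.
per_level = multires_seeds([(0, 0), (0.3, 0), (0.6, 0), (1, 0)],
                           d0=0.5, levels=2)
```

    Zooming in then amounts to walking down this hierarchy: the coarse streamlines already on screen stay put, and only the newly admitted ones fade in.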

    Progressive Transmission of Appearance Preserving Octree-Textures

    The development of shape repositories and 3D databases raises the need for online visualization of 3D objects. The main issue with remote visualization of large meshes is the transfer latency of the geometric information: the remote viewer requires the transfer of all polygons before allowing manipulation of the object. To avoid this latency problem, one approach is to send several levels of detail (LODs) of the same object, so that lighter versions can be displayed sooner and replaced with more detailed versions later on. This strategy requires more bandwidth, implies abrupt changes in the object's appearance as the geometry is refined, and incurs non-negligible precomputation time. Since the appearance of a 3D model is influenced more by its normal field than by its geometry, we propose a framework in which the object's geometric LODs are replaced with a single simplified mesh carrying an LOD of appearance. Using Appearance Preserving Octree-Textures (APO), this appearance LOD is encoded in a single texture, and details are progressively downloaded as they are needed. Our APO-based framework achieves nearly immediate object rendering while details are transmitted and smoothly added to the texture. Scenes keep a low geometric complexity while being displayed at interactive framerates with a maximum of visual detail, leading to a better visual-quality-to-bandwidth ratio than pure geometric LOD schemes. Our implementation is platform independent, as it uses JOGL and runs in a standard web browser. Furthermore, the framework does not require any processing on the server side during client rendering.
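    The client-side refinement can be sketched as follows. This is a minimal illustration with hypothetical names (`ProgressiveOctreeTexture`, `receive`, `lookup`), not the paper's encoding: the client stores received octree nodes keyed by their path from the root, and a lookup returns the deepest node received along the query path, so rendering never blocks and the appearance sharpens as nodes arrive.

```python
class ProgressiveOctreeTexture:
    """Client-side store for a progressively streamed octree texture:
    coarse nodes arrive first, and a texel lookup always returns the
    best (deepest) approximation received so far."""

    def __init__(self, root_value):
        self.nodes = {(): root_value}   # path of child indices -> value

    def receive(self, path, value):
        """Called as streamed nodes arrive (coarse before fine)."""
        self.nodes[tuple(path)] = value

    def lookup(self, path):
        """Deepest available approximation along the query path."""
        path = tuple(path)
        for depth in range(len(path), -1, -1):
            if path[:depth] in self.nodes:
                return self.nodes[path[:depth]]

tex = ProgressiveOctreeTexture(root_value=0.5)
tex.receive((3,), 0.8)      # a depth-1 node arrives
tex.receive((3, 1), 0.9)    # then one of its children
```

    Because `lookup` falls back to ancestors, a query into an unrefined region still returns a valid coarse value, which is what makes the smooth, non-blocking refinement of the abstract possible.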

    Seismic image restoration using nonlinear least squares shape optimization

    In this article, we present a new method for seismic image restoration. When observed, a seismic image is the result of an initial deposit system that has been transformed by a set of successive geological deformations (flexures, fault slips, etc.) that occurred over a long period of time. Seismic restoration consists of inverting these deformations to produce an image that depicts the geological deposit system as it was in a previous state. A tool that quickly generates restored images helps geophysicists recognize geological features that may be too strongly altered in the observed image. The proposed approach is based on a minimization process that expresses geological deformations in terms of geometrical constraints. We use a quickly converging Gauss-Newton scheme to solve the system. We provide results illustrating the seismic image restoration process on real data and show how the restored version can be used in a geological interpretation framework.
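    The solver the abstract relies on is standard Gauss-Newton for nonlinear least squares; a textbook sketch is shown below. This is not the paper's geological constraint system: the residual, the test problem, and the forward-difference Jacobian are illustrative assumptions.

```python
import numpy as np

def gauss_newton(residual, x0, n_iter=20, eps=1e-7):
    """Minimize ||r(x)||^2 by repeatedly solving the linearized
    least-squares problem J dx = -r, where J is the Jacobian of r
    (approximated here by forward differences)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        r = residual(x)
        J = np.empty((r.size, x.size))
        for j in range(x.size):
            dx = np.zeros_like(x)
            dx[j] = eps
            J[:, j] = (residual(x + dx) - r) / eps
        step, *_ = np.linalg.lstsq(J, -r, rcond=None)
        x = x + step
        if np.linalg.norm(step) < 1e-12:
            break
    return x

# Illustrative problem: fit (a, b) in y = a * exp(b * t) to clean data.
t = np.linspace(0.0, 1.0, 5)
y = 2.0 * np.exp(-1.0 * t)
res = lambda p: p[0] * np.exp(p[1] * t) - y
p = gauss_newton(res, [1.0, 0.0])
```

    In the restoration setting, x would gather the mesh node positions of the image and r the violations of the geometrical constraints; the quadratic-rate convergence near the solution is what makes the interactive use case feasible.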

    Appearance Preserving Octree-Textures

    Because of their geometric complexity, high-resolution 3D models, whether designed in high-end modeling packages or acquired with range-scanning devices, cannot be directly used in applications that require rendering at interactive framerates. One clever way to overcome this limitation is to perform an appearance-preserving geometry simplification, replacing the original model with a low-resolution mesh equipped with high-resolution normal maps. This process visually preserves small-scale features of the initial geometry while requiring only a reduced set of polygons. However, this conversion usually relies on some kind of global or piecewise parameterization combined with the generation of a texture atlas, a process that is computationally expensive and requires precise user supervision. In this paper, we propose an alternative method in which the normal field of a high-resolution model is adaptively sampled and encoded in an octree-based data structure that we call an appearance-preserving octree-texture (APO). Our main contributions are a normal-driven octree generation, a compact encoding, and an efficient look-up algorithm. Our method is efficient, fully automatic, and avoids the expensive creation of a parameterization with its corresponding texture atlas.
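    The normal-driven subdivision criterion can be sketched as follows. This is a heavily simplified illustration with hypothetical names (`build_apo` and its parameters), omitting the paper's compact encoding and fast look-up: a cell stores the mean normal of the samples it contains, and is subdivided only while some sample normal deviates from that mean by more than an angular tolerance, so flat regions stay coarse.

```python
import numpy as np

def build_apo(points, normals, lo, hi, depth=0, max_depth=4, tol=0.99):
    """Recursively build a normal-driven octree over sample points.
    A cell becomes a leaf when every sample normal agrees with the
    cell's mean normal (dot product >= tol) or max_depth is reached."""
    if len(points) == 0:
        return None
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    mean = normals.mean(axis=0)
    mean = mean / np.linalg.norm(mean)
    if depth == max_depth or (normals @ mean).min() >= tol:
        return {"normal": mean}          # flat enough: a single leaf
    mid = (lo + hi) / 2.0
    children = {}
    for octant in range(8):
        bits = [(octant >> axis) & 1 for axis in range(3)]
        sel = np.ones(len(points), dtype=bool)
        for axis in range(3):
            sel &= (points[:, axis] >= mid[axis]) == bool(bits[axis])
        node = build_apo(points[sel], normals[sel],
                         np.where(bits, mid, lo), np.where(bits, hi, mid),
                         depth + 1, max_depth, tol)
        if node is not None:
            children[octant] = node
    return {"children": children}

pts = np.array([[0.2, 0.2, 0.2], [0.8, 0.8, 0.8]])
flat = np.array([[0.0, 0.0, 1.0], [0.0, 0.0, 1.0]])
leaf = build_apo(pts, flat, [0, 0, 0], [1, 1, 1])       # agrees: one leaf
mixed = np.array([[0.0, 0.0, 1.0], [1.0, 0.0, 0.0]])
tree = build_apo(pts, mixed, [0, 0, 0], [1, 1, 1])      # disagrees: split
```

    Storing only occupied children is what keeps the structure adaptive: the octree is deep exactly where the surface's normal field varies.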